
    Design, Modeling and Control of a Thermal Management System for Hybrid Electric Vehicles

    Hybrid electric vehicle (HEV) technology has evolved over the last two decades to become economically feasible for mass-produced automobiles. With the integration of a lithium battery pack and electric motors, HEVs offer significantly higher fuel efficiency than traditional vehicles driven solely by an internal combustion engine. However, the additional HEV components also introduce new challenges for powertrain thermal management system design: alongside the internal combustion engine, the battery pack, generator(s), and electric motor(s) are now new heat sources that also require proper thermal management.

    Conventional cooling systems have typically been equipped with a belt-driven water pump and radiator fan, along with other mechanical actuators such as the thermostat valve, whose operation is generally determined by the engine speed. This open-loop cooling strategy has low efficiency and runs the risk of over-cooling the coolant and components within the system. In advanced thermal management systems, the mechanical elements are upgraded to computer-controlled actuators, including a servo-motor driven pump, variable speed fans, a smart thermostat, and an electric motor driven compressor. These electrified actuators offer the opportunity to improve temperature tracking and reduce parasitic losses.

    This dissertation investigates an HEV powertrain thermal management system featuring computer-controlled cooling system actuators. A suite of mathematical models has been created to describe the thermal behaviour of the HEV powertrain components. Model-based controllers were developed for the vehicle's cooling systems, including the battery pack, electric motors, and internal combustion engine. Optimal control theory has been applied to determine the ideal battery cooling air temperature and the desired heat removal rate on the e-motor cooling surface. A model predictive controller (MPC) was developed to regulate the refrigerant compressor and track the battery cooling air temperature. A series of Lyapunov-based nonlinear controllers has been implemented to regulate the coolant pumps and radiator fans in the cooling systems for the engine and e-motors.

    Representative numerical results are presented and discussed. Overall, the proposed control strategies have demonstrated their effectiveness in both improving temperature tracking performance and reducing cooling system power consumption. The peak temperature in the selected A123 battery core can be tracked to within 0.25 °C of the target, and a 50% reduction in vapor compression system energy consumption can be obtained by properly designing the cooling air flow structure. Similarly, for the HEV electric motors, the machine internal peak temperature can be tracked to the target value with a maximum error of 3.9 °C and an average error of 0.13 °C, and a 70% to 81% reduction in cooling system energy consumption can be achieved across different driving cycles compared to a classical controller maintaining a similar level of hotspot temperature stabilization. The proposed optimal nonlinear controller tracks the engine coolant temperature with an average error of 0.35 °C while reducing engine cooling power by at least 13%. Finally, a detailed analysis of the cooling system energy consumption reduction has been conducted with a heat exchanger simulation tool developed for cooling system design optimization.
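    To make the model-based design concrete, the sketch below pairs a single-node lumped-parameter thermal model with a Lyapunov-style pump law: the pump command is chosen so that, when the actuator is not saturated, the tracking error e = T - T_ref obeys de/dt = -k*e, so the Lyapunov function V = e^2/2 decreases along trajectories. This is a minimal illustration only; all symbols, parameter values, and the load profile are assumptions for demonstration, not models or values from the dissertation.

    # Illustrative sketch only: a single-node lumped-parameter thermal model under a
    # Lyapunov-based coolant pump law. All names and parameter values are assumptions
    # chosen for demonstration, not taken from the dissertation.

    M_C = 2.0e4    # lumped thermal capacitance m*c [J/K] (assumed)
    H_A = 80.0     # heat-transfer conductance per unit pump command [W/K] (assumed)
    T_COOL = 30.0  # coolant inlet temperature [deg C] (assumed)
    T_REF = 60.0   # target component temperature [deg C] (assumed)
    K_GAIN = 0.01  # Lyapunov convergence gain [1/s] (assumed)

    def pump_command(T, Q_gen):
        """Pick pump command u in [0, 1] so that, when feasible, the tracking error
        e = T - T_REF obeys de/dt = -K_GAIN*e; with V = e**2/2 this yields
        dV/dt = -K_GAIN*e**2 <= 0, i.e. a Lyapunov-decreasing closed loop."""
        e = T - T_REF
        q_removal = Q_gen + M_C * K_GAIN * e   # heat removal needed [W]
        delta = max(T - T_COOL, 1e-6)          # guard against division by zero
        u = q_removal / (H_A * delta)          # invert Q_rem = H_A*u*(T - T_COOL)
        return min(max(u, 0.0), 1.0)           # actuator saturation

    def simulate(t_end=3600.0, dt=1.0):
        T = 25.0  # initial component temperature [deg C] (assumed)
        for step in range(int(t_end / dt)):
            # toy square-wave heat load [W]; assumed measurable or estimated online
            Q_gen = 1500.0 if (step * dt) % 600.0 < 300.0 else 400.0
            u = pump_command(T, Q_gen)
            # lumped energy balance: m*c*dT/dt = Q_gen - Q_removed
            T += dt * (Q_gen - H_A * u * (T - T_COOL)) / M_C
        return T

    if __name__ == "__main__":
        print(f"final temperature: {simulate():.2f} deg C (target {T_REF} deg C)")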
    This research establishes a basis for holistic control of HEV powertrain thermal management systems, providing a suite of model-based nonlinear controllers that simultaneously regulate the cooling actuators for the battery pack, e-motors, and conventional internal combustion engine. Numerical studies have been conducted with a high-fidelity HEV model under real driving cycles to demonstrate the advantages of introducing advanced control theory into multi-mode vehicle drive systems.
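    The model predictive controller mentioned in the abstract can likewise be illustrated with a toy receding-horizon loop: at each step a short horizon of quantized compressor-speed sequences is searched, a tracking-plus-effort cost is evaluated against a simple linear air-temperature model, and only the first move of the cheapest sequence is applied. The dynamics coefficients, horizon, and weights below are assumptions for demonstration; a real implementation would use the identified plant model and a proper optimizer rather than exhaustive search.

    import itertools

    # Assumed first-order cooling-air model: T[k+1] = A*T[k] + B*u[k] + D, where u is
    # the normalized compressor speed. Coefficients are illustrative, not identified.
    A, B, D = 0.9, -1.5, 3.5
    T_REF = 25.0                            # cooling-air target [deg C] (assumed)
    HORIZON = 4                             # prediction horizon [steps]
    U_LEVELS = [i / 10 for i in range(11)]  # quantized compressor speeds 0.0..1.0
    RHO = 0.05                              # actuation-effort weight

    def mpc_step(T_now):
        """Exhaustively search quantized control sequences over the horizon and
        return the first move of the cheapest one (receding-horizon principle)."""
        best_cost, best_u0 = float("inf"), 0.0
        for seq in itertools.product(U_LEVELS, repeat=HORIZON):
            T, cost = T_now, 0.0
            for u in seq:
                T = A * T + B * u + D                  # one-step prediction
                cost += (T - T_REF) ** 2 + RHO * u**2  # tracking + effort
            if cost < best_cost:
                best_cost, best_u0 = cost, seq[0]
        return best_u0

    T = 35.0  # initial cooling-air temperature [deg C]
    for _ in range(30):
        u = mpc_step(T)
        T = A * T + B * u + D  # the plant is assumed to match the model here
    print(f"air temperature after 30 steps: {T:.2f} deg C (target {T_REF})")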


    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
    Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-bench
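    As a rough illustration of how a single BIG-bench JSON task can be scored, the sketch below reads a task.json in the format documented in the repository ("examples" entries carrying an "input" plus either a generative "target" or multiple-choice "target_scores") and computes a simple accuracy. The model callable is a hypothetical stand-in for a real language model API; note that the official harness scores multiple-choice tasks by per-choice log-likelihood rather than the simplified argmax comparison used here.

    import json

    def score_task(task_path, model):
        """Compute a simple accuracy for one BIG-bench JSON task."""
        with open(task_path) as f:
            task = json.load(f)
        correct = 0
        for ex in task["examples"]:
            if "target_scores" in ex:  # multiple-choice example
                choices = list(ex["target_scores"])
                pred = model(ex["input"], choices=choices)
                best = max(ex["target_scores"], key=ex["target_scores"].get)
                correct += pred == best
            else:                      # generative example: exact-match scoring
                targets = ex["target"] if isinstance(ex["target"], list) else [ex["target"]]
                correct += model(ex["input"]).strip() in {t.strip() for t in targets}
        return correct / len(task["examples"])

    # Hypothetical stand-in model: always answers the first choice / empty string.
    def dummy_model(prompt, choices=None):
        return choices[0] if choices else ""

    if __name__ == "__main__":
        # assumes a task.json copied locally from the BIG-bench repository
        print(score_task("task.json", dummy_model))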